Sample Size Lower Bounds in PAC Learning by Algorithmic Complexity Theory

Authors

  • Bruno Apolloni
  • Claudio Gentile
Abstract

This paper focuses on a general setup for obtaining sample size lower bounds for learning concept classes under fixed distribution laws in an extended PAC learning framework. These bounds do not depend on the running time of learning procedures and are information-theoretic in nature. They are based on incompressibility methods drawn from Kolmogorov Complexity and Algorithmic Probability theories. © 1998 Elsevier Science B.V. All rights reserved.
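The incompressibility method rests on a simple counting fact: strings with short descriptions are rare. A minimal sketch of that counting bound (illustrative only; the function name and interface are our own, not from the paper):

```python
def compressible_fraction_bound(n: int, c: int) -> float:
    """Upper-bound the fraction of n-bit strings whose Kolmogorov
    complexity is below n - c: there are fewer than 2**(n - c)
    binary descriptions shorter than n - c bits."""
    short_descriptions = 2 ** (n - c) - 1  # descriptions of length < n - c
    all_strings = 2 ** n
    return short_descriptions / all_strings

# Fewer than one string in 2**c can be compressed by c or more bits,
# so a "typical" string is incompressible -- the fact that
# incompressibility-based lower-bound arguments exploit.
```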


Similar Articles

PAC Reinforcement Learning Bounds for RTDP and Rand-RTDP

Real-time Dynamic Programming (RTDP) is a popular algorithm for planning in a Markov Decision Process (MDP). It can also be viewed as a learning algorithm, where the agent improves the value function and policy while acting in an MDP. It has been empirically observed that an RTDP agent generally performs well when viewed this way, but past theoretical results have been limited to asymptotic con...



On the Sample Complexity of Noise-Tolerant Learning

In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of Ω(log(1/δ) / (ε(1−2η))) on the number of examples required for PAC learning in the presence of classification noise. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise...
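The bound above diverges as the noise rate η approaches 1/2, where labels carry almost no information. A hedged sketch of the order term (constants suppressed; the helper name is ours, not from the cited paper):

```python
import math

def noise_tolerant_order_term(eps: float, delta: float, eta: float) -> float:
    """Order term of the lower bound Omega(log(1/delta) / (eps * (1 - 2*eta))).
    Constants are suppressed, so this conveys only the growth rate."""
    assert 0 <= eta < 0.5, "classification noise rate must be below 1/2"
    return math.log(1 / delta) / (eps * (1 - 2 * eta))

# Higher noise rates force more examples: the denominator shrinks
# toward zero as eta -> 1/2.
```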


The Optimal Sample Complexity of PAC Learning

This work establishes a new upper bound on the number of samples sufficient for PAC learning in the realizable case. The bound matches known lower bounds up to numerical constant factors. This solves a long-standing open problem on the sample complexity of PAC learning. The technique and analysis build on a recent breakthrough by Hans Simon.
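The matching rate in question is, to our understanding, Θ((d + log(1/δ))/ε), where d is the VC dimension of the concept class; a sketch of the order term under that assumption (function name is ours, constants suppressed):

```python
import math

def realizable_pac_order_term(d: int, eps: float, delta: float) -> float:
    """Order of the optimal realizable-case PAC sample complexity,
    Theta((d + log(1/delta)) / eps); constants suppressed."""
    return (d + math.log(1 / delta)) / eps

# Richer classes (larger d), tighter accuracy (smaller eps), and higher
# confidence (smaller delta) all increase the required sample size.
```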


Sharp bounds for population recovery

The population recovery problem is a basic problem in noisy unsupervised learning that has attracted significant research attention in recent years [WY12, DRWY12, MS13, BIMP13, LZ15, DST16]. A number of different variants of this problem have been studied, often under assumptions on the unknown distribution (such as that it has restricted support size). In this work we study the sample complexi...



Journal:
  • Theor. Comput. Sci.

Volume 209, Issue

Pages -

Publication year: 1998